Nearly Optimal First-Order Methods for Convex Optimization under Gradient Norm Measure: an Adaptive Regularization Approach

Authors

Abstract

In the development of first-order methods for smooth (resp., composite) convex optimization problems, where smooth functions with Lipschitz continuous gradients are minimized, the gradient (resp., gradient mapping) norm is a fundamental optimality measure. Under this measure, a fixed-iteration algorithm with the optimal iteration complexity for the smooth case is known, while determining the number of iterations needed to obtain a desired accuracy requires prior knowledge of the distance from the initial point to the optimal solution set. In this paper, we report an adaptive regularization approach, which attains the nearly optimal iteration complexity without knowing the distance to the optimal solution set. To obtain further faster convergence adaptively, we secondly apply this approach to construct a first-order method that is adaptive to the Hölderian error bound condition (or equivalently, the Łojasiewicz gradient property), which covers moderately wide classes of applications. The proposed methods attain nearly optimal iteration complexity with respect to the gradient mapping norm.
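The approach can be pictured with a minimal sketch in which the smooth problem is regularized by a proximal term (σ/2)‖x − x₀‖² and σ is halved whenever the target gradient-norm accuracy on the original objective is missed. The sketch below is an illustration under simplifying assumptions, not the authors' method: it uses plain gradient descent as the inner solver (the paper builds on an optimal accelerated scheme), and the names `adaptive_reg_gradient_norm`, `sigma0`, and `max_outer` are invented for this example.

```python
import numpy as np

def adaptive_reg_gradient_norm(grad_f, x0, L, eps, sigma0=1.0, max_outer=50):
    """Sketch: drive ||grad f(x)|| below eps without knowing the distance
    from x0 to the solution set, by minimizing the regularized function
    f_sigma(x) = f(x) + (sigma/2) * ||x - x0||^2 and halving sigma whenever
    the achieved accuracy on the original f is insufficient."""
    x, sigma = x0.copy(), sigma0
    for _ in range(max_outer):
        step = 1.0 / (L + sigma)              # f_sigma is (L + sigma)-smooth
        for _ in range(10_000):               # inner solver: gradient descent
            g_reg = grad_f(x) + sigma * (x - x0)
            if np.linalg.norm(g_reg) <= eps / 2:
                break                         # f_sigma solved accurately enough
            x = x - step * g_reg
        if np.linalg.norm(grad_f(x)) <= eps:
            return x                          # target met on the original f
        sigma *= 0.5                          # regularization too strong; relax
    return x

# Example: f(x) = 0.5 * ||A x - b||^2 with grad f(x) = A.T @ (A x - b).
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
x = adaptive_reg_gradient_norm(lambda x: A.T @ (A @ x - b),
                               x0=np.zeros(2), L=4.0, eps=1e-6)
```

The point of the halving rule is that no bound on ‖x₀ − x*‖ is needed: if σ is too large, the regularized solution is biased toward x₀ and the test on ‖∇f‖ fails, which triggers a weaker regularization.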


Similar Articles

First-order Methods for Geodesically Convex Optimization

Geodesic convexity generalizes the notion of (vector space) convexity to nonlinear metric spaces. But unlike convex optimization, geodesically convex (g-convex) optimization is much less developed. In this paper we contribute to the understanding of g-convex optimization by developing iteration complexity analysis for several first-order algorithms on Hadamard manifolds. Specifically, we prove ...
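To make the setting concrete, here is a hypothetical toy example (not from the paper) of Riemannian gradient descent on a one-dimensional Hadamard manifold: the positive reals with metric ⟨u, v⟩ₓ = uv/x², on which f(x) = (log x)² is geodesically convex even though it is nonconvex in the Euclidean sense.

```python
import numpy as np

def riemannian_gd(df, x0, lr=0.25, n_iters=50):
    # Riemannian gradient descent on (R_{>0}, dx^2 / x^2), a toy Hadamard
    # manifold. Riemannian gradient: x^2 * f'(x); exponential map at x:
    # Exp_x(v) = x * exp(v / x). Function and step size are illustrative.
    x = x0
    for _ in range(n_iters):
        rgrad = x * x * df(x)                # Euclidean -> Riemannian gradient
        x = x * np.exp(-lr * rgrad / x)      # step along the geodesic
    return x

# f(x) = (log x)^2 is g-convex on this manifold; its minimizer is x* = 1.
x_min = riemannian_gd(lambda x: 2.0 * np.log(x) / x, x0=5.0)
```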


An adaptive accelerated first-order method for convex optimization

In this paper, we present a new accelerated variant of Nesterov’s method for solving a class of convex optimization problems, in which certain acceleration parameters are adaptively (and aggressively) chosen so as to preserve the theoretical iteration complexity of the original method and substantially improve its practical performance in comparison to the other existing variants. Computatio...
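For reference, a minimal sketch of the underlying (non-adaptive) accelerated scheme follows; the paper's adaptive, aggressive choice of the acceleration parameters is not reproduced here, and the names are illustrative.

```python
import numpy as np

def nesterov_accelerated(grad_f, x0, L, n_iters=100):
    # Baseline Nesterov acceleration for L-smooth convex f, with the
    # classical t-sequence; adaptive variants replace these fixed rules.
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iters):
        x_next = y - grad_f(y) / L                        # gradient step at y
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum step
        x, t = x_next, t_next
    return x
```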


Fast First-Order Methods for Composite Convex Optimization with Backtracking

We propose new versions of accelerated first-order methods for convex composite optimization, where the prox parameter is allowed to increase from one iteration to the next. In particular, we show that a full backtracking strategy can be used within the FISTA [1] and FALM [7] algorithms while preserving their worst-case iteration complexities of O(√(L(f)/ε)). In the original versions of FISTA an...
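A hedged sketch of the full-backtracking idea for FISTA follows: the Lipschitz estimate L is optimistically decreased at every iteration (so the prox parameter 1/L can increase) and then backtracked until the usual sufficient-decrease condition holds. The decrease factor 0.9 and the function names are assumptions for illustration, not the constants of [1] or [7].

```python
import numpy as np

def fista_full_backtracking(f, grad_f, prox, x0, L0=1.0, n_iters=100):
    # prox(v, t) should return argmin_u { g(u) + ||u - v||^2 / (2 t) }.
    x, y, t, L = x0.copy(), x0.copy(), 1.0, L0
    for _ in range(n_iters):
        L *= 0.9                     # optimistic decrease: 1/L may grow here
        while True:
            z = prox(y - grad_f(y) / L, 1.0 / L)
            d = z - y
            # Sufficient decrease: f(z) <= f(y) + <grad f(y), d> + L/2 ||d||^2
            if f(z) <= f(y) + grad_f(y) @ d + 0.5 * L * (d @ d):
                break
            L *= 2.0                 # backtrack until the condition holds
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = z + ((t - 1.0) / t_next) * (z - x)
        x, t = z, t_next
    return x
```

In the original FISTA the estimate L never decreases; the single `L *= 0.9` line is where a full-backtracking variant departs from it.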


Implementation of an Optimal First-Order Method for Strongly Convex Total Variation Regularization

We present a practical implementation of an optimal first-order method, due to Nesterov, for large-scale total variation regularization in tomographic reconstruction, image deblurring, etc. The algorithm applies to μ-strongly convex objective functions with L-Lipschitz continuous gradient. In the framework of Nesterov both μ and L are assumed known – an assumption that is seldom satisfied in pr...
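As a point of reference, a minimal sketch of the constant-momentum scheme for known μ and L (not the article's implementation) looks as follows; handling unknown μ and L is precisely the practical difficulty the abstract alludes to.

```python
import numpy as np

def nesterov_strongly_convex(grad_f, x0, mu, L, n_iters=200):
    # Nesterov's constant-momentum method for a mu-strongly convex,
    # L-smooth objective; both constants are assumed known here.
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    x, y = x0.copy(), x0.copy()
    for _ in range(n_iters):
        x_next = y - grad_f(y) / L          # gradient step
        y = x_next + beta * (x_next - x)    # momentum with fixed beta
        x = x_next
    return x
```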


Proximal and First-Order Methods for Convex Optimization

We describe the proximal method for minimization of convex functions. We review classical results, recent extensions, and interpretations of the proximal method that work in online and stochastic optimization settings.
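A minimal, self-contained illustration of the proximal machinery (chosen for this page, not taken from the survey) is the proximal gradient method for min f(x) + λ‖x‖₁, where the prox of the ℓ1 term is the closed-form soft-thresholding operator.

```python
import numpy as np

def prox_l1(v, t):
    # Proximal operator of t * ||.||_1: componentwise soft thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad_f, x0, L, lam, n_iters=200):
    # Proximal gradient (ISTA) for min f(x) + lam * ||x||_1, f L-smooth.
    x = x0.copy()
    for _ in range(n_iters):
        x = prox_l1(x - grad_f(x) / L, lam / L)
    return x

# Example: a tiny lasso, f(x) = 0.5 * ||A x - b||^2.
A = np.array([[1.0, 0.2], [0.2, 1.0]])
b = np.array([1.0, 0.0])
x_hat = proximal_gradient(lambda x: A.T @ (A @ x - b),
                          x0=np.zeros(2), L=1.5, lam=0.1)
```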



Journal

Journal title: Journal of Optimization Theory and Applications

Year: 2021

ISSN: 0022-3239, 1573-2878

DOI: https://doi.org/10.1007/s10957-020-01806-7